Go With The Flow: Churn-Tolerant Decentralized Training of Large Language Models

Blagoev, Nikolay, Cox, Bart, Decouchant, Jérémie, Chen, Lydia Y.

arXiv.org Artificial Intelligence

Motivated by the emergence of large language models (LLMs) and the importance of democratizing their training, we propose GWTF, the first practical crash-tolerant decentralized training framework for LLMs. Unlike existing distributed and federated training frameworks, GWTF enables the efficient collaborative training of an LLM on heterogeneous clients that volunteer their resources. In addition, GWTF handles node churn, i.e., clients joining or leaving the system at any time, and network instabilities, i.e., network links becoming unstable or unreliable. The core of GWTF is a novel decentralized flow algorithm that finds the most effective routing, maximizing the number of microbatches trained with the lowest possible delay. We extensively evaluate GWTF on GPT-like and LLaMa-like models and compare it against the prior art. Our results indicate that GWTF reduces the training time by up to 45% in realistic and challenging scenarios involving heterogeneous client nodes distributed over 10 different geographic locations with a high node churn rate.
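
The routing core described above is, at heart, a flow computation over the client graph. The toy sketch below is not GWTF's actual algorithm (which is decentralized and delay-aware); it only illustrates the flavor with a plain Edmonds-Karp max-flow over a hypothetical two-GPU volunteer cluster, where capacities stand for the microbatches per round each link can carry.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; cap is a dict-of-dicts of edge capacities."""
    # build residual capacities, including zero-capacity reverse edges
    res = {u: dict(vs) for u, vs in cap.items()}
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # recover the path, find its bottleneck, and augment along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Hypothetical cluster: data source -> two volunteer GPU nodes -> sink;
# capacities = microbatches per round each link can carry.
caps = {
    "data": {"gpu1": 4, "gpu2": 3},
    "gpu1": {"sink": 5},
    "gpu2": {"sink": 2},
    "sink": {},
}
print(max_flow(caps, "data", "sink"))  # 4 via gpu1 + 2 via gpu2 = 6
```

A churn event would simply delete a node's edges from `caps` and re-run the computation; GWTF's contribution is doing this routing decentrally and with delay awareness.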


Latency Optimization for Wireless Federated Learning in Multihop Networks

Shaon, Shaba, Nguyen, Van-Dinh, Nguyen, Dinh C.

arXiv.org Artificial Intelligence

In this paper, we study a novel latency minimization problem for wireless federated learning (FL) over multi-hop networks. The system comprises multiple routes, each integrating leaf and relay nodes for FL model training. We explore a personalized learning and adaptive aggregation-aware FL (PAFL) framework that effectively addresses data heterogeneity across participating nodes by harmonizing individual and collective learning objectives. We formulate an optimization problem aimed at minimizing system latency through the joint optimization of the leaf and relay nodes as well as the relay routing indicators. We also incorporate an additional energy harvesting scheme at the relay nodes to support their relaying tasks. This formulation presents a computationally demanding challenge, so we develop a simple yet efficient algorithm based on block coordinate descent and successive convex approximation (SCA) techniques. Simulation results illustrate the efficacy of the proposed joint optimization of leaf and relay nodes with relay routing indicators. We observe significant latency savings in the wireless multi-hop PAFL system, with reductions of up to 69.37% compared to schemes that optimize only one node type, a traditional greedy algorithm, and a scheme without relay routing indicators.
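
The solution strategy above, block coordinate descent, can be illustrated on a toy problem. The sketch below alternates exact per-block minimizers of a made-up two-variable objective; in the paper, the per-block subproblems would instead come from SCA applied to the latency model.

```python
def bcd(argmin_x, argmin_y, x0, y0, iters=50):
    """Block coordinate descent: alternately minimize over one block of
    variables with the other held fixed (as done for the leaf and relay
    variables; SCA would supply the minimizers for non-convex pieces)."""
    x, y = x0, y0
    for _ in range(iters):
        x = argmin_x(y)   # exact minimizer over x with y fixed
        y = argmin_y(x)   # exact minimizer over y with x fixed
    return x, y

# Toy coupled objective f(x, y) = (x-1)^2 + (y-2)^2 + 0.5*x*y, chosen only
# to show the alternating structure (not the paper's latency model).
# Setting each partial derivative to zero gives the per-block minimizers:
#   df/dx = 2(x-1) + 0.5y = 0  ->  x = 1 - 0.25y
#   df/dy = 2(y-2) + 0.5x = 0  ->  y = 2 - 0.25x
x, y = bcd(lambda y: 1 - 0.25 * y, lambda x: 2 - 0.25 * x, 0.0, 0.0)
print(round(x, 4), round(y, 4))  # 0.5333 1.8667
```

Each full sweep contracts the error by a factor of 0.0625 here, which is why a few dozen iterations suffice; convergence of BCD in general depends on the coupling strength between blocks.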


Over-the-Air Inference over Multi-hop MIMO Networks

Bian, Chenghong, Hua, Meng, Gunduz, Deniz

arXiv.org Artificial Intelligence

A novel over-the-air machine learning framework over multi-hop multiple-input multiple-output (MIMO) networks is proposed. The core idea is to imitate fully connected (FC) neural network layers using multiple MIMO channels by carefully designing the precoding matrices at the transmitting nodes. A neural network dubbed PrototypeNet, consisting of multiple FC layers, is employed, with the number of neurons in each layer equal to the number of antennas at the corresponding terminal. To achieve satisfactory performance, PrototypeNet is trained with a customized loss function combining the classification error and the power of the latent vectors (to satisfy the transmit power constraints), with noise injection during training. The precoding matrices for each hop are then obtained by solving an optimization problem. We also propose a multiple-block extension for when the number of antennas is limited. Numerical results verify that the proposed over-the-air transmission scheme achieves satisfactory classification accuracy under a power constraint. The results also show that higher classification accuracy can be achieved with an increasing number of hops at a modest signal-to-noise ratio (SNR).
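
The key trick, making the channel itself compute an FC layer, can be sketched in miniature. Assuming a known invertible 2x2 channel H and a target weight matrix W (both made up for illustration), choosing the precoder P = H^{-1} W makes the noise-free received signal equal the layer output; the paper's actual precoders come from an optimization under power constraints, with noise handled during training.

```python
def matmul(A, B):
    # plain matrix product for small nested-list matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(H):
    # closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Known 2x2 MIMO channel H and a trained FC-layer weight matrix W
# (illustrative numbers, not from the paper).
H = [[2.0, 0.0], [1.0, 1.0]]
W = [[1.0, 2.0], [3.0, 4.0]]

# Precode with P = H^{-1} W so the channel applies the layer itself:
# the receiver observes y = H (P x) = W x in the noise-free case.
P = matmul(inv2(H), W)
x = [[1.0], [1.0]]
y = matmul(H, matmul(P, x))
print(y)  # [[3.0], [7.0]], i.e. W x
```

The nonlinearity between layers is applied at each relay terminal, which is why a deeper "network" corresponds to more hops.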


Electrical Load Forecasting over Multihop Smart Metering Networks with Federated Learning

Rahman, Ratun, Moriano, Pablo, Khan, Samee U., Nguyen, Dinh C.

arXiv.org Artificial Intelligence

Electric load forecasting is essential for power management and stability in smart grids. It is mainly supported by advanced metering infrastructure, where smart meters (SMs) record household energy data. Traditional machine learning (ML) methods are often employed for load forecasting but require data sharing, which raises data privacy concerns. Federated learning (FL) can address this issue by running distributed ML models at local SMs without data exchange. However, current FL-based approaches struggle to achieve efficient load forecasting due to imbalanced data distributions across heterogeneous SMs. This paper presents a novel personalized federated learning (PFL) method for high-quality load forecasting in metering networks. A meta-learning-based strategy is developed to address data heterogeneity at local SMs in the collaborative training of local load forecasting models. Moreover, to minimize the load forecasting delays in our PFL model, we study a new latency optimization problem based on optimal resource allocation at the SMs. A theoretical convergence analysis is also conducted to provide insights into FL design for federated load forecasting. Extensive simulations on real-world datasets show that our method outperforms existing approaches in terms of both load forecasting quality and operational latency costs.
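
The meta-learning strategy for heterogeneous SMs can be illustrated with a Reptile-style update, used here as an assumed stand-in for the paper's method: each meter adapts a shared scalar model with a few local SGD steps on its own loss, and the server nudges the global model toward the adapted models.

```python
def local_adapt(w, theta, steps=5, lr=0.1):
    """A few local SGD steps on one meter's loss (w - theta)^2,
    where theta is that meter's own optimum (toy heterogeneity)."""
    for _ in range(steps):
        w -= lr * 2.0 * (w - theta)
    return w

def meta_round(w, thetas, meta_lr=0.5):
    """Reptile-style meta update (an assumed simplification of the
    paper's meta-learning strategy): move the global model toward
    the average of the locally adapted models."""
    adapted = [local_adapt(w, t) for t in thetas]
    return w + meta_lr * sum(a - w for a in adapted) / len(adapted)

w = 0.0
thetas = [1.0, 3.0, 5.0]     # heterogeneous per-meter optima
for _ in range(30):
    w = meta_round(w, thetas)
print(round(w, 3))  # converges near the mean optimum, 3.0
```

The point of the meta-trained initialization is that each meter then reaches its own optimum (1, 3, or 5 here) in only a handful of local steps, which is what makes personalization cheap.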


Two Birds with One Stone: Multi-Task Semantic Communications Systems over Relay Channel

Cao, Yujie, Wu, Tong, Chen, Zhiyong, Xu, Yin, Tao, Meixia, Zhang, Wenjun

arXiv.org Artificial Intelligence

In this paper, we propose a novel multi-task, multi-link relay semantic communications (MTML-RSC) scheme that enables the destination node to simultaneously perform image reconstruction and classification with one transmission from the source node. In the MTML-RSC scheme, the source node broadcasts a signal using semantic communications, and the relay node forwards the signal to the destination. We analyze the coupling relationship between the two tasks and the two links (source-to-relay and source-to-destination) and design a semantic-focused forwarding method for the relay node, where it selectively forwards only the semantics of the relevant class while ignoring others. At the destination, the node combines signals from both the source node and the relay node to perform classification, and then uses the classification result to assist in decoding the signal from the relay node for image reconstruction. Experimental results demonstrate that the proposed MTML-RSC scheme achieves significant performance gains, e.g., a $1.73$ dB improvement in peak signal-to-noise ratio (PSNR) for image reconstruction and an increase in classification accuracy from $64.89\%$ to $70.31\%$.


Device Scheduling for Relay-assisted Over-the-Air Aggregation in Federated Learning

Zhang, Fan, Chen, Jining, Wang, Kunlun, Chen, Wen

arXiv.org Artificial Intelligence

Federated learning (FL) leverages data distributed at the edge of the network to enable intelligent applications. The efficiency of FL can be improved by using over-the-air computation (AirComp) technology in the gradient aggregation process. In this paper, we propose a relay-assisted large-scale FL framework and investigate the device scheduling problem in relay-assisted FL systems under power consumption and mean squared error (MSE) constraints. We formulate a joint device scheduling and power allocation problem to maximize the number of scheduled devices. We solve the resultant non-convex optimization problem by transforming it into multiple sparse optimization problems. The proposed device scheduling algorithm solves these sparse sub-problems and obtains the maximum number of participating edge devices. The simulation results demonstrate the effectiveness of the proposed scheme compared with other benchmark schemes.
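
As a rough illustration of scheduling under power and MSE budgets, the sketch below substitutes a simple greedy rule for the paper's sparse-optimization algorithm; the device list, budgets, and per-device MSE contributions are made-up numbers.

```python
def schedule(devices, power_budget, mse_cap):
    """Greedy stand-in for the paper's sparsity-based scheduler: admit
    the cheapest devices first while the total transmit power and the
    aggregation-MSE budget both hold.
    devices: list of (name, power, mse_contribution) tuples."""
    chosen, power, mse = [], 0.0, 0.0
    for name, p, m in sorted(devices, key=lambda d: d[1]):
        if power + p <= power_budget and mse + m <= mse_cap:
            chosen.append(name)
            power += p
            mse += m
    return chosen

devs = [("d1", 1.0, 0.02), ("d2", 2.0, 0.01), ("d3", 4.0, 0.05), ("d4", 1.5, 0.03)]
print(schedule(devs, power_budget=5.0, mse_cap=0.07))  # ['d1', 'd4', 'd2']
```

Greedy admission is not optimal in general, which is precisely why the paper reformulates the count-maximization objective as sparse optimization sub-problems instead.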


Deep Reinforcement Learning-based Rebalancing Policies for Profit Maximization of Relay Nodes in Payment Channel Networks

Papadis, Nikolaos, Tassiulas, Leandros

arXiv.org Artificial Intelligence

Payment channel networks (PCNs) are a layer-2 blockchain scalability solution, with its main entity, the payment channel, enabling transactions between pairs of nodes "off-chain," thus reducing the burden on the layer-1 network. Nodes with multiple channels can serve as relays for multihop payments by providing their liquidity and withholding part of the payment amount as a fee. Relay nodes might after a while end up with one or more unbalanced channels, and thus need to trigger a rebalancing operation. In this paper, we study how a relay node can maximize its profits from fees by using the rebalancing method of submarine swaps. We introduce a stochastic model to capture the dynamics of a relay node observing random transaction arrivals and performing occasional rebalancing operations, and express the system evolution as a Markov Decision Process. We formulate the problem of the maximization of the node's fortune over time over all rebalancing policies, and approximate the optimal solution by designing a Deep Reinforcement Learning (DRL)-based rebalancing policy. We build a discrete event simulator of the system and use it to demonstrate the DRL policy's superior performance under most conditions by conducting a comparative study of different policies and parameterizations. Our work is the first to introduce DRL for liquidity management in the complex world of PCNs.
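
The MDP-plus-RL formulation above can be miniaturized as follows: tabular Q-learning (standing in for the paper's DRL policy) on a toy channel whose state is the local balance, with a single "rebalance" action that refills liquidity for a fee. All numbers are illustrative, not the paper's submarine-swap model.

```python
import random

def learn_rebalance(steps=5000, eps=0.1, alpha=0.2, gamma=0.9):
    """Tabular Q-learning on a toy relay-channel MDP: state = local
    balance in 0..4; action 0 holds, action 1 rebalances (refill to 4
    for a fee of 1); each step one unit payment is relayed if liquidity
    allows, earning a fee of 1."""
    rng = random.Random(0)
    q = {(s, a): 0.0 for s in range(5) for a in range(2)}
    b = 4
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(2)                      # explore
        else:
            a = max((0, 1), key=lambda x: q[(b, x)])  # exploit
        reward, nb = 0.0, b
        if a == 1:
            nb, reward = 4, reward - 1.0              # pay rebalancing fee
        if nb > 0:                                    # relay one payment
            reward += 1.0
            nb -= 1
        best_next = max(q[(nb, 0)], q[(nb, 1)])
        q[(b, a)] += alpha * (reward + gamma * best_next - q[(b, a)])
        b = nb
    return q

q = learn_rebalance()
# with enough training, the learned policy rebalances a depleted channel:
print(q[(0, 1)] > q[(0, 0)])
```

The paper's setting replaces this table with a deep network because the real state (balances, fees, arrival statistics) is continuous and high-dimensional, but the hold-vs-rebalance trade-off it learns has the same shape.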


Graph Neural Network Based Node Deployment for Throughput Enhancement

Yang, Yifei, Zou, Dongmian, He, Xiaofan

arXiv.org Artificial Intelligence

The recent rapid growth in mobile data traffic entails a pressing demand for improving the throughput of the underlying wireless communication networks. Network node deployment has been considered as an effective approach for throughput enhancement which, however, often leads to highly non-trivial non-convex optimizations. Although convex approximation based solutions are considered in the literature, their approximation to the actual throughput may be loose and sometimes lead to unsatisfactory performance. With this consideration, in this paper, we propose a novel graph neural network (GNN) method for the network node deployment problem. Specifically, we fit a GNN to the network throughput and use the gradients of this GNN to iteratively update the locations of the network nodes. Besides, we show that an expressive GNN has the capacity to approximate both the function value and the gradients of a multivariate permutation-invariant function, as a theoretic support to the proposed method. To further improve the throughput, we also study a hybrid node deployment method based on this approach. To train the desired GNN, we adopt a policy gradient algorithm to create datasets containing good training samples. Numerical experiments show that the proposed methods produce competitive results compared to the baselines.
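
The core loop above, fit a differentiable surrogate of throughput and follow its gradient to move nodes, can be sketched in one dimension. Here a hand-written quadratic proxy replaces the trained GNN, and a single relay between two fixed terminals at positions 0 and 10 is moved by gradient ascent.

```python
def throughput(x, a=0.0, b=10.0):
    """Differentiable surrogate for network throughput: penalize the
    squared lengths of the two links of a relay at position x
    (a hand-made stand-in for the trained GNN in the paper)."""
    return -((x - a) ** 2 + (b - x) ** 2)

def grad(f, x, h=1e-6):
    # central finite difference, mimicking backprop through the surrogate
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0                                  # initial relay position
for _ in range(200):
    x += 0.05 * grad(throughput, x)      # gradient *ascent* on the surrogate
print(round(x, 3))  # -> 5.0, the throughput-optimal midpoint
```

The paper's contribution is showing that an expressive GNN can approximate both the throughput value and its gradients with respect to node locations, so this update remains meaningful when the surrogate is learned rather than hand-written.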


MILP, pseudo-boolean, and OMT solvers for optimal fault-tolerant placements of relay nodes in mission critical wireless networks

Chen, Quian Matteo, Finzi, Alberto, Mancini, Toni, Melatti, Igor, Tronci, Enrico

arXiv.org Artificial Intelligence

In critical infrastructures like airports, much care has to be devoted to protecting radio communication networks from external electromagnetic interference. Protection of such mission-critical radio communication networks is usually tackled by exploiting radiogoniometers: at least three suitably deployed radiogoniometers, and a gateway gathering information from them, make it possible to monitor and localise sources of electromagnetic emissions that are not supposed to be present in the monitored area. Typically, radiogoniometers are connected to the gateway through relay nodes. As a result, some degree of fault-tolerance for the network of relay nodes is essential in order to offer reliable monitoring. On the other hand, the deployment of relay nodes is typically quite expensive. This yields two conflicting requirements: minimise costs while guaranteeing a given fault-tolerance. In this paper, we address the problem of computing a deployment for relay nodes that minimises the relay node network cost while at the same time guaranteeing proper working of the network even when some of the relay nodes (up to a given maximum number) become faulty (fault-tolerance). We show that, by means of a computation-intensive pre-processing on an HPC infrastructure, the above optimisation problem can be encoded as a 0/1 Linear Program, becoming suitable to be approached with standard Artificial Intelligence reasoners like MILP, PB-SAT, and SMT/OMT solvers. Our problem formulation enables us to present experimental results comparing the performance of these three solving technologies on a real case study of a relay node network deployment in areas of the Leonardo da Vinci Airport in Rome, Italy.
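
On a tiny instance, the shape of the 0/1 optimisation can be made concrete by brute force. The sketch below uses a simplified coverage model, assumed for illustration only: each sensor must be reachable via at least k+1 chosen relays so that any k relay faults are survivable; the paper's real encoding captures network connectivity and is handed to MILP/PB-SAT/OMT solvers rather than enumerated.

```python
from itertools import combinations

def cheapest_tolerant(relays, sensors, k):
    """Brute-force stand-in for the 0/1 Linear Program: find the
    cheapest relay subset such that every sensor is covered by at
    least k+1 chosen relays (tolerating any k relay faults).
    relays: {name: cost}; sensors: {sensor: set of covering relays}."""
    names = list(relays)
    best, best_cost = None, float("inf")
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            chosen = set(subset)
            if all(len(cov & chosen) >= k + 1 for cov in sensors.values()):
                cost = sum(relays[n] for n in chosen)
                if cost < best_cost:
                    best, best_cost = chosen, cost
    return best, best_cost

# Made-up instance: four candidate relay sites, two sensors.
relays = {"r1": 3, "r2": 2, "r3": 4, "r4": 2}
sensors = {"s1": {"r1", "r2", "r3"}, "s2": {"r2", "r3", "r4"}}
print(cheapest_tolerant(relays, sensors, k=1))
```

Enumeration is exponential in the number of candidate sites, which is exactly why the paper's HPC pre-processing plus solver encoding matters for realistic airport-scale instances.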


CARL-DTN: Context Adaptive Reinforcement Learning based Routing Algorithm in Delay Tolerant Network

Yesuf, Fuad Yimer, Prathap, M.

arXiv.org Artificial Intelligence

The term Delay/Disruption-Tolerant Networks (DTN) was coined to describe and cover all types of long-delay, disconnected, or intermittently connected networks, where mobility, outages, or scheduled contacts may be experienced. This environment is characterized by frequent network partitioning, intermittent connectivity, large or variable delay, asymmetric data rates, and low transmission reliability. Routing protocols have been developed for DTNs; however, these routing algorithms are designed based on specific assumptions, which makes them suitable only for specific environment scenarios. Different routing algorithms use different relay node selection criteria to choose the replication node. Forwarding messages too frequently can result in excessive packet loss and large buffer and network overhead; on the other hand, less frequent transmission leads to a lower delivery ratio. In DTNs there is thus a trade-off between delivery ratio and overhead. In this study, we propose a context-adaptive reinforcement learning based routing (CARL-DTN) protocol to determine the optimal number of message replicas based on real-time node density. Our routing protocol jointly uses real-time physical context, social-tie strength, and real-time message context, combined via fuzzy logic, in the routing decision. Multi-hop forwarding probability is also considered for relay node selection by employing a Q-learning algorithm to estimate the encounter probability between nodes and to learn about nodes available in the neighborhood through discounted rewards. The performance of the proposed protocol is evaluated in various simulation scenarios. The results show that the proposed protocol performs better in terms of message delivery ratio and overhead.
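
The encounter-probability estimation can be sketched as a running per-pair estimate updated toward each observed contact outcome, which is an assumed simplification of CARL-DTN's actual Q-learning update; node names and the contact history below are made up.

```python
def update_encounter(q, a, b, met, alpha=0.3):
    """TD-style running estimate of the encounter probability between
    two DTN nodes: move the estimate toward 1 on a contact and toward
    0 on a missed contact window."""
    key = tuple(sorted((a, b)))
    old = q.get(key, 0.0)
    target = 1.0 if met else 0.0
    q[key] = old + alpha * (target - old)
    return q[key]

def pick_relay(q, candidates, dest):
    # replicate the message to the neighbour with the highest learned
    # probability of encountering the destination
    return max(candidates, key=lambda n: q.get(tuple(sorted((n, dest))), 0.0))

q = {}
# observed contact windows: (node, destination, encountered?)
history = [("n1", "dst", True), ("n1", "dst", True), ("n2", "dst", False),
           ("n1", "dst", False), ("n2", "dst", True)]
for a, b, met in history:
    update_encounter(q, a, b, met)
print(pick_relay(q, ["n1", "n2"], "dst"))  # n1 (estimate 0.357 vs 0.3)
```

In the full protocol this estimate is only one input; it is fused with social-tie strength and message context through fuzzy logic before the replication decision is made.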